I only use free AI tools like Gemini and ChatGPT, and only for small programs (generally Python or bash, at most a few hundred lines). They're great as timesavers, and as search and learning tools.
The thing is, you have to learn to tell the AI precisely what you want, or you won't get it. Then, if you want to modify the result, you basically have to prompt the AI to make the changes. With Gemini, for example, once your session is done you more or less have to start from scratch with the prompts. Hence I record the prompts I used at the start of every script.
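To illustrate (a hypothetical example, not one of my actual scripts): the prompt log is just a comment block at the top, followed by whatever the AI produced, so a future session can be restarted from the same prompts.

```python
#!/usr/bin/env python3
# Prompt log -- the prompts used to generate this script, recorded so
# a fresh AI session can be restarted from scratch later:
#
#   1. "Write a Python script that renames all .jpeg files in a
#      directory to .jpg."
#   2. "Skip any file whose .jpg counterpart already exists."
#
# (The code below is a hypothetical example of such AI output.)
import os

def rename_jpegs(directory):
    """Rename *.jpeg to *.jpg in directory, skipping collisions."""
    for name in os.listdir(directory):
        if name.endswith(".jpeg"):
            target = name[: -len(".jpeg")] + ".jpg"
            if not os.path.exists(os.path.join(directory, target)):
                os.rename(os.path.join(directory, name),
                          os.path.join(directory, target))
```

The point is only the comment block: the code itself is disposable, because with the prompts on record you can regenerate or re-prompt it in a new session.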
Modifying AI output is akin to modifying the output of a transpiler. Say your transpiler takes some high-level language and outputs C, and you hand-edit the C. If you later regenerate from the high-level language, you have to make your C tweaks all over again. Which is time consuming.
I guess Linus is using this project as a way to play with vibe coding on a project where it doesn't matter in a critical way whether it works. But I think it's worth looking at AI prompting as a bunch of new programming languages: you still have to think carefully about what you want the computer to do, it's just that the AI saves you a lot of time googling around for packages and looking things up in the docs. Provided you can read and check, or somehow test, what an AI gives you, there's not much of a problem. It's when you're really vibe coding, and can neither read the AI's output nor test it for correctness, that the problems seep in.